Generalization error for Tweedie models: decomposition and error reduction with bagging


Abstract

Wüthrich and Buser (DOI:10.2139/ssrn.2870308, 2020) studied the generalization error for Poisson regression models. This short note aims to extend their results to the Tweedie family of distributions, to which the Poisson law belongs. In the case of bagging, a new condition emerges that becomes increasingly binding with the power parameter involved in the variance function.
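The power parameter mentioned in the abstract enters through the Tweedie variance function V(μ) = μ^p, with p = 1 recovering the Poisson case. A minimal, illustrative sketch (the function names and the toy bootstrap-averaging predictor are illustrative assumptions, not taken from the paper):

```python
import random
import statistics

def tweedie_variance(mu, p):
    """Tweedie variance function V(mu) = mu ** p.
    p = 0: Gaussian, p = 1: Poisson, p = 2: Gamma, p = 3: inverse Gaussian."""
    return mu ** p

def bagged_prediction(sample, n_boot=200, seed=0):
    """Bagging in its simplest form: average a base predictor
    (here just the sample mean) over bootstrap resamples."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_boot):
        boot = [rng.choice(sample) for _ in sample]
        preds.append(statistics.mean(boot))
    return statistics.mean(preds)
```

For the Poisson member (p = 1) the variance equals the mean, which is the setting of Wüthrich and Buser; larger p makes the variance grow faster with the mean, which is where the note's new bagging condition bites.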


Similar resources

Generalization error bounds for stationary autoregressive models

We derive generalization error bounds for stationary univariate autoregressive (AR) models. We show that imposing stationarity is enough to control the Gaussian complexity without further regularization. This lets us use structural risk minimization for model selection. We demonstrate our methods by predicting interest rate movements.


Error reduction in EMG signal decomposition.

Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061...


TESTING FOR AUTOCORRELATION IN UNEQUALLY REPLICATED FUNCTIONAL MEASUREMENT ERROR MODELS

In ordinary linear models, regressing the residuals against their lagged values has been suggested as an approach to testing the hypothesis of zero autocorrelation among residuals. In this paper we extend these results to both equally and unequally replicated functional measurement error models. We consider the equally and unequally replicated cases separately, because in the first case the re...


Error Reduction for Extractors

We present a general method to reduce the error of any extractor. Our method works particularly well in the case that the original extractor extracts up to a constant fraction of the source min-entropy and achieves a polynomially small error. In that case, we are able to reduce the error to (almost) any ε, using only O(log(1/ε)) additional truly random bits (while keeping the other parameters o...


Training Error, Generalization Error and Learning Curves in Neural Learning

A neural network is trained by using a set of available examples to minimize the training error such that the network parameters fit the examples well. However, it is desired to minimize the generalization error, to which no direct access is possible. There are discrepancies between the training error and the generalization error due to the statistical fluctuation of examples...
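The discrepancy described above can be illustrated numerically with the simplest possible estimator, the sample mean (a toy simulation under assumed Gaussian data, not from the paper):

```python
import random
import statistics

def mse(data, m):
    """Mean squared error of the constant predictor m."""
    return statistics.mean((y - m) ** 2 for y in data)

rng = random.Random(1)
gaps = []
for _ in range(500):
    # Fit a trivial "model" (the sample mean) on a small training set,
    # then compare in-sample error against error on fresh data.
    train = [rng.gauss(0.0, 1.0) for _ in range(10)]
    fresh = [rng.gauss(0.0, 1.0) for _ in range(1000)]
    m = statistics.mean(train)
    gaps.append(mse(fresh, m) - mse(train, m))

avg_gap = statistics.mean(gaps)
# For n training points, E[training MSE] = (1 - 1/n) * sigma**2 while
# E[generalization MSE] = (1 + 1/n) * sigma**2: the fitted parameter
# adapts to the training fluctuations, so the gap averages ~ 2/n.
```

The averaged gap is positive: the training error systematically understates the generalization error, which is the statistical fluctuation the abstract refers to.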



Journal

Journal: European Actuarial Journal

Year: 2021

ISSN: 2190-9733, 2190-9741

DOI: https://doi.org/10.1007/s13385-021-00265-2